Guardrail Integration
DynamoGuard policies can be applied to models in the Dynamo AI platform. Currently, DynamoGuard is supported only for Remote Model Objects. DynamoGuard can be integrated into an LLM application in two ways: (1) Managed Inference and (2) Custom Integration.
Remote Model Objects
Remote Model Objects create a connection to any model that is provided by a third party or is already hosted and accessible through an API endpoint. Currently, DynamoFL supports the following providers, as well as custom endpoints: OpenAI, Azure OpenAI, AWS SageMaker, Vertex AI, Google PaLM, Google Gemini, Mistral AI, Cohere, Anthropic, Hugging Face, Replicate, Together AI, and vLLM.
Managed Inference
When using a model with DynamoGuard, a managed inference endpoint (/chat/) is automatically set up. This endpoint should be used in place of the standard model endpoint in your application code, and it automatically handles the following workflow (see the request sketch after the list):
- User input is received by DynamoGuard.
- DynamoGuard runs input policies on the user input.
- Based on policy results, the user input is blocked, sanitized, or forwarded to the model.
- The model response is received by DynamoGuard.
- DynamoGuard runs output policies on the model response.
- Based on policy results, the model response is blocked, sanitized, or returned to the user.
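A minimal sketch of calling the managed inference endpoint from application code. The base URL, authentication header, model identifier path parameter, and request/response schema shown here are assumptions for illustration; consult your deployment's API reference for the exact values.

```python
import requests

# Assumed values for illustration only; substitute your deployment's
# base URL, model identifier, and credentials.
DYNAMO_BASE_URL = "https://api.dynamo.example.com"
MODEL_ID = "<your-model-id>"   # hypothetical path parameter for /chat/
API_KEY = "<your-api-key>"


def guarded_chat(user_input: str) -> dict:
    """Send user input through the managed /chat/ endpoint so DynamoGuard
    runs input and output policies around the model call."""
    response = requests.post(
        f"{DYNAMO_BASE_URL}/chat/{MODEL_ID}",            # managed inference endpoint (assumed shape)
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"messages": [{"role": "user", "content": user_input}]},  # assumed request body
        timeout=30,
    )
    response.raise_for_status()
    # The returned payload contains the blocked, sanitized, or forwarded model response.
    return response.json()


if __name__ == "__main__":
    print(guarded_chat("What is our refund policy?"))
```

Calling this endpoint instead of the model's own endpoint means the application never has to implement the block/sanitize/forward logic itself; DynamoGuard applies it on both the input and output legs of the request.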
Custom Integration
DynamoGuard can also be integrated into your application in a custom manner using the /analyze/ endpoint. The analyze endpoint provides a one-off analysis of a piece of text and returns the policy results. The response object can be used in the application to determine the appropriate next step.
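A hedged sketch of a custom integration using the /analyze/ endpoint. The request body and the response fields used for branching (for example, an action value and sanitized text) are illustrative placeholders, not the documented schema; adapt them to the actual policy-result object returned by your deployment.

```python
import requests

DYNAMO_BASE_URL = "https://api.dynamo.example.com"   # assumed base URL
API_KEY = "<your-api-key>"


def analyze_text(text: str) -> dict:
    """Request a one-off analysis of a piece of text against DynamoGuard policies."""
    response = requests.post(
        f"{DYNAMO_BASE_URL}/analyze/",                # analyze endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},                          # assumed request field
        timeout=30,
    )
    response.raise_for_status()
    return response.json()                            # policy results


def handle_user_input(user_input: str, call_model) -> str:
    """Decide the application's next step from the policy results.
    The 'action' and 'sanitized_text' fields are hypothetical names."""
    result = analyze_text(user_input)
    action = result.get("action")
    if action == "BLOCK":
        return "This request cannot be processed."
    if action == "SANITIZE":
        return call_model(result.get("sanitized_text", user_input))
    return call_model(user_input)                     # no policy violation: forward as-is
```

Because the application receives the raw policy results, it is free to implement behavior beyond block/sanitize/forward, such as logging violations or prompting the user to rephrase.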